Feasibility analysis of design for remanufacturing in bearings using hybrid fuzzy-TOPSIS and Taguchi optimization
The tremendous advancement in technology, productivity, and standard of living has come at the cost of environmental deterioration and increased energy and raw material consumption. In this regard, remanufacturing is a viable option for reducing energy usage, carbon footprint, and raw material usage. In this manuscript, we use computational intelligence techniques to determine the feasibility of remanufacturing roller bearings. We collected used N308 bearings from 5 different Indian cities. Using Fuzzy-TOPSIS, we found that roundness, surface roughness, and weight play a vital role in design for remanufacturing of roller bearings, while change in diameter, change in thickness, and change in width showed minimal influence. We also used Taguchi analysis to reassess the problem: the roundness of the inner and outer races was the most influential parameter in deciding whether a bearing should be selected for remanufacturing, whereas the weight of the rollers had the least influence. These results suggest that bearing designers should ensure that the roundness of both races is controlled during manufacturing. Overall, the predictions of the Taguchi analysis were similar to those of the Fuzzy-TOPSIS analysis.
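Since the feasibility ranking rests on TOPSIS, a minimal sketch of the classical (crisp) TOPSIS step may help orient the reader; the paper uses a fuzzy variant, and the criteria values and weights below are purely illustrative, not data from the study.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with classical TOPSIS.

    matrix  : (alternatives x criteria) decision matrix
    weights : criterion weights summing to 1
    benefit : True for criteria to maximize, False to minimize
    """
    # Vector-normalize each criterion column, then apply the weights.
    norm = matrix / np.linalg.norm(matrix, axis=0)
    v = norm * weights
    # Ideal and anti-ideal points depend on criterion direction.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # closeness to ideal; higher is better

# Illustrative only: three used bearings scored on three criteria
# (roundness error, surface roughness, weight loss), all to be minimized.
scores = np.array([[0.02, 0.8, 1.5],
                   [0.05, 1.2, 2.0],
                   [0.01, 0.6, 1.0]])
closeness = topsis(scores, np.array([0.5, 0.3, 0.2]),
                   np.array([False, False, False]))
print(closeness.argsort()[::-1])  # candidate ranking for remanufacturing
```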
Criteria for the experimental observation of multi-dimensional optical solitons in saturable media
Criteria for experimental observation of multi-dimensional optical solitons in media with saturable refractive nonlinearities are developed. The criteria are applied to actual material parameters (characterizing the cubic self-focusing and quintic self-defocusing nonlinearities, two-photon loss, and optical-damage threshold) for various glasses. This way, we identify operation windows for soliton formation in these glasses. It is found that two-photon absorption sets stringent limits on the windows. We conclude that, while a well-defined window of parameters exists for two-dimensional solitons (spatial or spatiotemporal), for their three-dimensional spatiotemporal counterparts such a window does not exist, due to the nonlinear loss in glasses.
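For concreteness, the saturable response referred to above is conventionally modeled by a cubic-quintic refractive index together with two-photon absorption; the following is a generic sketch of that standard model, with textbook symbols rather than the paper's parameter values.

```latex
% Standard cubic-quintic model behind such criteria (a sketch; the
% coefficients n_0, n_2, n_4, \beta_2 are the usual textbook symbols,
% not values taken from the paper):
\begin{align}
  n(I) &= n_0 + n_2 I - n_4 I^2, \\  % cubic self-focusing, quintic self-defocusing
  \frac{\mathrm{d}I}{\mathrm{d}z} &= -\beta_2 I^2  % two-photon (nonlinear) loss
\end{align}
```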
Parallel state-space search for a first solution with consistent linear speedups
Consider the problem of exploring a large state-space for a goal state where, although many such states may exist in the state-space, finding any one state satisfying the requirements is sufficient. All the methods known until now for conducting such a search in parallel using multiprocessors fail to provide consistent linear speedups over sequential execution. The speedups vary from sublinear to superlinear and from one execution to another. Further, adding more processors may sometimes lead to a slow-down rather than a speedup, giving rise to the speedup anomalies reported in the literature. We present a prioritizing strategy which yields consistent speedups that are close to P with P processors and that increase monotonically with the addition of processors. This is achieved by keeping the total number of nodes expanded during parallel search very close to that of a sequential search. In addition, the strategy requires substantially less memory than other methods. The performance of this strategy is demonstrated on a multiprocessor with several state-space search problems.

KEY WORDS: Parallel algorithms; parallel depth-first search; first solution; state-space trees; linear speedup
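The key idea, keeping the parallel expansion order close to the sequential one, can be sketched as follows; this is an illustrative reconstruction, not the authors' implementation. Each node's priority encodes its sequential DFS rank (the path of child indices from the root), so idle workers always take the frontier node a sequential depth-first search would reach first.

```python
import heapq, threading

def parallel_first_solution(root, expand, is_goal, n_workers=4):
    """Parallel state-space search for a first solution. The shared frontier
    is a min-heap keyed by each node's sequential DFS rank (its path of
    child indices), which keeps the set of nodes expanded in parallel very
    close to what a sequential depth-first search would expand."""
    frontier = [((), root)]            # (dfs_path, node)
    pending = 1                        # nodes queued or under expansion
    cv = threading.Condition()
    found = []

    def worker():
        nonlocal pending
        while True:
            with cv:
                while not frontier and pending and not found:
                    cv.wait()
                if found or not pending:
                    cv.notify_all()
                    return
                path, node = heapq.heappop(frontier)
            if is_goal(node):
                with cv:
                    found.append(node)
                    cv.notify_all()
                return
            children = list(expand(node))
            with cv:
                for i, child in enumerate(children):
                    # A child's priority extends its parent's DFS path.
                    heapq.heappush(frontier, (path + (i,), child))
                pending += len(children) - 1
                cv.notify_all()

    workers = [threading.Thread(target=worker) for _ in range(n_workers)]
    for w in workers: w.start()
    for w in workers: w.join()
    return found[0] if found else None
```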
Holistic Slowdown Driven Scheduling and Resource Management for Malleable Jobs
In job scheduling, the concept of malleability has been explored for many years. Research shows that malleability improves system performance, but its use in HPC never became widespread. The causes are the difficulty of developing malleable applications and the lack of support for, and integration across, the different layers of the HPC software stack. In recent years, however, malleability in job scheduling has become more critical because of the increasing complexity of hardware and workloads. In this context, using nodes in exclusive mode is not always the most efficient solution: it suits traditional HPC jobs, whose applications are highly tuned for static allocations, but it offers zero flexibility for dynamic executions. This paper proposes a new holistic, dynamic job scheduling policy, Slowdown Driven (SD-Policy), which exploits the malleability of applications as the key technology to reduce the average slowdown and response time of jobs. SD-Policy is based on backfilling and node sharing. It applies malleability to running jobs to make room for jobs that will run with a reduced set of resources, but only when the estimated slowdown improves over the static approach. We implemented SD-Policy in SLURM and evaluated it in a real production environment and with a simulator using workloads of up to 198K jobs. Results show better resource utilization, with reductions in makespan, response time, slowdown, and energy consumption of up to 7%, 50%, 70%, and 6%, respectively, for the evaluated workloads.
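The core scheduling decision, shrinking running malleable jobs only when doing so improves a queued job's estimated slowdown, might look like the sketch below; the function names, the cluster interface, and the runtime-scaling model are illustrative assumptions, not SLURM's or the paper's API.

```python
def estimated_slowdown(wait_time, run_time):
    # Slowdown of a job: (wait + run) / run.
    return (wait_time + run_time) / run_time

def try_slowdown_driven_backfill(queued_job, running_jobs, cluster, now):
    """Sketch of an SD-Policy-style decision (illustrative, not the paper's
    code). Shrink malleable running jobs to free nodes for `queued_job`,
    but only if its estimated slowdown on the reduced allocation beats
    waiting for the full (static) allocation."""
    # Baseline: keep waiting for the full node request.
    static_wait = cluster.earliest_start(queued_job.nodes_requested, now) - now
    static_sd = estimated_slowdown(queued_job.waited + static_wait,
                                   queued_job.runtime_estimate)

    # Alternative: start now on fewer nodes freed by shrinking malleable jobs.
    reduced_nodes = cluster.free_nodes(now) + sum(
        j.shrinkable_nodes for j in running_jobs if j.is_malleable)
    if reduced_nodes == 0:
        return False
    # Assume runtime scales inversely with nodes (a crude illustrative model).
    reduced_runtime = (queued_job.runtime_estimate *
                       queued_job.nodes_requested / reduced_nodes)
    reduced_sd = estimated_slowdown(queued_job.waited, reduced_runtime)

    if reduced_sd < static_sd:
        for j in running_jobs:
            if j.is_malleable and j.shrinkable_nodes:
                cluster.shrink(j, j.shrinkable_nodes)   # make room
        cluster.start(queued_job, reduced_nodes, now)
        return True
    return False
```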
Discovering Valuable Items from Massive Data
Suppose there is a large collection of items, each with an associated cost and an inherent utility that is revealed only once we commit to selecting it. Given a budget on the cumulative cost of the selected items, how can we pick a subset of maximal value? This task generalizes several important problems such as multi-arm bandits, active search and the knapsack problem. We present an algorithm, GP-Select, which utilizes prior knowledge about similarity between items, expressed as a kernel function. GP-Select uses Gaussian process prediction to balance exploration (estimating the unknown value of items) and exploitation (selecting items of high value). We extend GP-Select to be able to discover sets that simultaneously have high utility and are diverse. Our preference for diversity can be specified as an arbitrary monotone submodular function that quantifies the diminishing returns obtained when selecting similar items. Furthermore, we exploit the structure of the model updates to achieve an order of magnitude (up to 40X) speedup in our experiments without resorting to approximations. We provide strong guarantees on the performance of GP-Select and apply it to three real-world case studies of industrial relevance: (1) refreshing a repository of prices in a Global Distribution System for the travel industry, (2) identifying diverse, binding-affine peptides in a vaccine design task, and (3) maximizing clicks in a web-scale recommender system by recommending items to users.
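A minimal sketch of the exploration-exploitation loop such a selector performs, using a GP posterior with an optimistic (upper-confidence-style) rule under a budget; this is a generic reconstruction, not the authors' implementation, which additionally handles diversity and fast model updates. Here `features` is an (n, d) numpy array and `costs` a numpy vector.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_select(features, costs, reveal_utility, budget, beta=2.0):
    """Greedy budgeted selection with a GP posterior (illustrative sketch).

    At each step, pick the unselected affordable item maximizing an
    optimistic value estimate (mean + beta * std) per unit cost, then
    observe its true utility and refit the GP."""
    n = len(features)
    selected, X_obs, y_obs = [], [], []
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    spent = 0.0
    while True:
        candidates = [i for i in range(n)
                      if i not in selected and spent + costs[i] <= budget]
        if not candidates:
            break
        if X_obs:
            gp.fit(np.array(X_obs), np.array(y_obs))
            mu, sigma = gp.predict(features[candidates], return_std=True)
        else:
            mu, sigma = np.zeros(len(candidates)), np.ones(len(candidates))
        # Optimistic score per unit cost balances exploration/exploitation.
        scores = (mu + beta * sigma) / costs[candidates]
        best = candidates[int(np.argmax(scores))]
        utility = reveal_utility(best)          # utility revealed on commit
        selected.append(best)
        X_obs.append(features[best]); y_obs.append(utility)
        spent += costs[best]
    return selected
```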
Predicting application performance using supervised learning on communication features
Abstract not provided.
High Temperature Ferromagnetism with Giant Magnetic Moment in Transparent Co-doped SnO2-d
Occurrence of room temperature ferromagnetism is demonstrated in pulsed laser deposited thin films of Sn1-xCoxO2-d (x < 0.3). Interestingly, films of Sn0.95Co0.05O2-d grown on R-plane sapphire not only exhibit ferromagnetism with a Curie temperature close to 650 K, but also a giant magnetic moment of about 7 Bohr magnetons per Co, not yet reported in any diluted magnetic semiconductor system. The films are semiconducting and optically highly transparent.
A Comparative Analysis of Load Balancing Algorithms Applied to a Weather Forecast Model
Among the many reasons for load imbalance in weather forecasting models, the dynamic imbalance caused by localized variations in the state of the atmosphere is the hardest one to handle. As an example, active thunderstorms may substantially increase load at a certain timestep with respect to previous timesteps in an unpredictable manner; after all, tracking storms is one of the reasons for running a weather forecasting model. In this paper, we present a comparative analysis of different load balancing algorithms to deal with this kind of load imbalance. We analyze the impact of these strategies on computation and communication, and the effects of the frequency at which the load balancer is invoked on execution time. This is done without any code modification, employing the concept of processor virtualization, which basically means that the domain is over-decomposed and the unit of rebalance is a sub-domain. With this approach, we were able to reduce the execution time of a full, real-world weather model.
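The over-decomposition idea can be illustrated with a simple greedy rebalancer that reassigns measured sub-domain loads to processors; this is a generic sketch of the technique (in the spirit of Charm++-style load balancers), not the specific strategies evaluated in the paper.

```python
import heapq

def greedy_rebalance(subdomain_loads, n_procs):
    """Assign over-decomposed sub-domains to processors, heaviest first,
    always onto the currently least-loaded processor (LPT heuristic).

    subdomain_loads: measured cost of each sub-domain this timestep,
    e.g. higher where thunderstorms make the physics more expensive."""
    # Min-heap of (accumulated_load, processor_id).
    procs = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(procs)
    assignment = {}
    for sd in sorted(range(len(subdomain_loads)),
                     key=lambda i: subdomain_loads[i], reverse=True):
        load, p = heapq.heappop(procs)
        assignment[sd] = p
        heapq.heappush(procs, (load + subdomain_loads[sd], p))
    return assignment  # sub-domain -> processor for the coming timesteps
```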